Backpropagation Algorithm Adaptation Parameters Using Learning Automata

Authors

  • Hamid Beigy
  • Mohammad Reza Meybodi
Abstract

Despite the many successful applications of backpropagation for training multi-layer neural networks, it has several drawbacks. For complex problems it may require a long time to train the network, or it may not train at all. Long training times can be the result of non-optimal parameter values, and it is not easy to choose appropriate values for a particular problem. In this paper, by interconnecting fixed-structure learning automata (FSLA) to a feedforward neural network, we apply a learning automata (LA) scheme that adjusts these parameters based on observation of the network's random response. The main motivation for using learning automata as the adaptation mechanism is their capability for global optimization on multi-modal error surfaces. The feasibility of the proposed method is shown through simulations on three learning problems: exclusive-or, the encoding problem, and digit recognition. The simulation results show that adapting these parameters with this method not only increases the convergence rate of learning but also increases the likelihood of escaping local minima.
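To make the mechanism concrete, the sketch below is not taken from the paper (the FSLA/BP interconnection described there is more elaborate); TsetlinAutomaton, train_one_epoch and all constants are illustrative placeholders. It shows how a simple two-action fixed-structure automaton could scale the backpropagation learning rate up or down each epoch, receiving a reward whenever the training error drops:

    # Illustrative sketch only -- not the exact FSLA/BP interconnection of the
    # paper. A two-action Tsetlin-style fixed-structure automaton decides each
    # epoch whether to shrink or grow the learning rate, and is rewarded
    # whenever the training error drops.

    class TsetlinAutomaton:
        """Two-action fixed-structure automaton with memory depth N (L_{2N,2})."""

        def __init__(self, depth=4):
            self.depth = depth   # memory depth N per action
            self.state = 0       # states 0..N-1 -> action 0, N..2N-1 -> action 1

        def action(self):
            return 0 if self.state < self.depth else 1

        def update(self, rewarded):
            if rewarded:
                # a favourable response moves the state deeper into the
                # current action's region, increasing commitment to it
                if self.action() == 0:
                    self.state = max(0, self.state - 1)
                else:
                    self.state = min(2 * self.depth - 1, self.state + 1)
            else:
                # an unfavourable response moves the state toward the boundary;
                # crossing the boundary switches to the other action
                self.state += 1 if self.action() == 0 else -1


    # Toy stand-in for one epoch of backpropagation: a single gradient step on
    # a one-dimensional quadratic, returning the post-update error.
    w = 5.0
    def train_one_epoch(eta):
        global w
        w -= eta * 2.0 * w        # gradient of the error w**2
        return w * w

    la = TsetlinAutomaton(depth=4)
    eta, prev_error = 0.1, float("inf")
    for epoch in range(50):
        eta *= 0.9 if la.action() == 0 else 1.1   # action 0: shrink, action 1: grow
        eta = min(max(eta, 1e-4), 0.9)            # keep eta in a sensible range
        error = train_one_epoch(eta)
        la.update(rewarded=(error < prev_error))  # reward: this epoch reduced the error
        prev_error = error

In the same spirit, separate automata could drive the momentum factor and the steepness parameter, with the reward signal derived from the observed change in training error.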


Related Articles

New Learning Automata Based Algorithms for Adaptation of Backpropagation Algorithm Parameters

One popular learning algorithm for feedforward neural networks is the backpropagation (BP) algorithm, which includes the parameters learning rate (η), momentum factor (α) and steepness parameter (λ). The appropriate selection of these parameters has a large effect on the convergence of the algorithm. Many techniques that adaptively adjust these parameters have been developed to increase...
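For orientation, these three parameters enter the backpropagation equations in their conventional textbook roles (a generic formulation, not quoted from the article): η scales the gradient step, α weights the previous weight change (momentum), and λ sets the steepness of the sigmoid activation.

    % Conventional roles of the three BP parameters (generic, not from the paper)
    \Delta w_{ij}(t) = -\eta \,\frac{\partial E}{\partial w_{ij}} + \alpha \,\Delta w_{ij}(t-1),
    \qquad
    f(x) = \frac{1}{1 + e^{-\lambda x}}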


New Classes of Learning Automata Based Schemes for Adaptation of Backpropagation

One popular learning algorithm for feedforward neural networks is the backpropagation (BP) algorithm, which includes the parameters learning rate (η), momentum factor (α) and steepness parameter (λ). The appropriate selection of these parameters has a large effect on the convergence of the algorithm. Many techniques that adaptively adjust these parameters have been developed to increase speed o...


Experimentation on Learning Automata Based Methods for Adaptation of BP Parameters

The backpropagation learning algorithm has a number of parameters, such as the learning rate (η), momentum factor (α) and steepness parameter (λ), whose values are not known in advance and must be determined by trial and error. The appropriate selection of these parameters has a large effect on the convergence of the algorithm. Many techniques that adaptively adjust these parameters have been developed t...


Introducing an Adaptive VLR Algorithm Using Learning Automata for Multilayer Perceptron

One of the biggest limitations of the BP algorithm is its low rate of convergence. The Variable Learning Rate (VLR) algorithm is one of the well-known techniques that enhance the performance of BP. Because the VLR parameters have an important influence on its performance, we use learning automata (LA) to adjust them. The proposed algorithm, named Adaptive Variable Learning Rate (AVLR), algor...
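A minimal sketch of a common VLR-style rule is given below (a generic heuristic, not necessarily the exact VLR/AVLR variant of the cited work; the function name and thresholds are placeholders): grow the learning rate while the error keeps falling, and shrink it, rejecting the step, when the error rises beyond a tolerance.

    # Generic variable-learning-rate heuristic (illustrative; not the exact
    # VLR/AVLR rule of the cited work). Returns the new eta and whether the
    # last weight update should be accepted.
    def vlr_update(eta, error, prev_error, inc=1.05, dec=0.7, tolerance=1.04):
        if error > prev_error * tolerance:
            return eta * dec, False   # error jumped: shrink eta, reject the step
        if error < prev_error:
            return eta * inc, True    # error fell: cautiously grow eta
        return eta, True              # roughly flat: keep eta unchanged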



Journal:
  • International Journal of Neural Systems

Volume 11, Issue 3

Pages: -

Publication year: 2001